LANGUAGE-RECOGNITION PROCESSES FOR UNDERSTANDING DIALOGUES
IN TELETYPED PSYCHIATRIC INTERVIEWS

     Since the behavior being simulated by this paranoid model is
the sequential language-behavior of a paranoid patient in a
psychiatric interview, the model must have an ability to interpret
and respond to natural language input sufficient to demonstrate
conduct characteristic of the paranoid mode. By "natural language"
I shall mean ordinary American English such as is used in everyday
conversations. It is still difficult to be explicit about the
processes which enable humans to interpret and respond to natural
language. (A mighty maze! but not without a plan - A. Pope).
Philosophers, linguists and psychologists have investigated natural
language with various purposes. Few of the results have been useful
to builders of interactive simulation models. Attempts have been
made in artificial intelligence to write algorithms which
"understand" teletyped natural language expressions (Colby and
Enea, 1967; Enea and Colby, 1973; Schank, 1973; Winograd, 1973;
Woods, 1970). Computer understanding of natural language is actively
being attempted today but it is not something to be completely
achieved today or even tomorrow. The problem at the moment is not to
find immediately the best way of doing it but to find any way at all.
     During the 1960's, when machine processing of natural language
was dominated by syntactic considerations, it became clear that
syntactic information alone was insufficient to comprehend the
expressions of ordinary conversations. A current view is that to
understand what is said in linguistic expressions, knowledge of
syntax and semantics must be combined with beliefs from a conceptual
structure capable of making inferences. How to achieve this
combination efficiently with a large data-base represents a
monumental task for both theory and implementation.
     For performance reasons we did not attempt to construct a
conventional linguistic parser to analyze the conversational language
of interviews. Parsers to date have had great difficulty in
performing well enough to assign a meaningful interpretation to the
expressions of everyday conversational language in unrestricted
English. Purely syntactic parsers offer a cancerous proliferation of
interpretations. A conventional parser, lacking neglecting and
ignoring mechanisms, may simply halt when it comes across a word not
in its dictionary. Parsers represent tight conjunctions of tests
instead of the loose disjunctions needed for gleaning some degree of
meaning from everyday language communication. It is easily observed
that people misunderstand and ununderstand at times and thus remain
partially opaque to one another, a truth which lies at the core of
human life and communication.
     How language is understood depends on how people interpret
the meanings of situations they find themselves in. In a dialogue,
language is understood in accordance with a participant's view of the
situation. The participants are interested in both what an utterance
means (what it refers to) and what the utterer means (his
intentions). In a first psychiatric interview the doctor's intention
is to gather certain kinds of information; the patient's intention is
to give information in order to receive help. Such an interview is
not small talk; a job is to be done. Our purpose was to develop a
method for recognizing sequences of everyday English sufficient for
the model to communicate linguistically in a paranoid way in the
circumscribed situation of a psychiatric interview.
     We did not try to construct a general-purpose algorithm which
could understand anything said in English by anybody to anybody in
any dialogue situation. (Does anyone believe it possible? The
seductive myth of generalization leads only to trivialization.) We
sought simply to extract or distill some degree of partial,
idiosyncratic, idiolectic meaning (not the "complete" meaning,
whatever that means) from the input. We utilized a pattern-directed,
rather than a parsing-directed, approach because of the former's
power to ignore irrelevant details.
     Natural language is not an agreed-on universe of discourse
such as arithmetic, wherein symbols have a fixed meaning for everyone
who uses them. What we loosely call "natural language" is actually a
set of history-dependent, selective, and interest-oriented idiolects,
each being unique to an individual with a unique history. (To be
unique does not mean that no property is shared with other
individuals, only that not every property is shared.) It is the broad
overlap of idiolects which allows the communication of shared
meanings in everyday conversation.
     We took as pragmatic measures of "understanding" the
ability (1) to form a conceptualization so that questions can be
answered and commands carried out, (2) to determine the intention of
the interviewer, and (3) to determine the references for pronouns and
other anticipated topics. This straightforward approach to a complex
problem has its drawbacks, as will be shown, but we strove for a
highly individualized idiolect sufficient to demonstrate paranoid
processes of an individual in a particular situation rather than for
a general supra-individual or ideal comprehension of English. If the
language-recognition processes interfered with demonstrating the
paranoid processes, we would consider them defective and insufficient
for our purposes.
     The language-recognition process utilized by the model first
puts the teletyped input in the form of a list and then determines
the syntactic type of the input expression - question, statement or
imperative - by looking at introductory terms and at punctuation. The
expression is then scanned for conceptualizations, i.e. patterns of
contentives consisting of words or word-groups, stress-forms of
speech having conceptual meaning relevant to the model's interests.
The search for conceptualizations ignores (as irrelevant details)
function or closed-class terms (articles, auxiliaries, conjunctions,
prepositions, etc.) except as they might represent a component in a
contentive word-group. For example, the word-group (for a living) is
defined to mean `work' as in "what do you do for a living?" The
conceptualization is classified according to the rules of Fig. 1 as
malevolent, benevolent or neutral. Thus the language recognizer
attempts to judge the intention of the utterer from the content of
the utterance.
(INSERT FIG.1 HERE)
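     The flow of this first stage can be suggested by the following
sketch, written in present-day Python purely for illustration; it is
not the model's actual code, and the word lists are drastically
abbreviated stand-ins for the model's dictionary and the rules of
Fig. 1:
.V
# Illustrative sketch only: the input is typed as question,
# imperative or statement, then scanned for contentives, ignoring
# function words except inside defined word-groups such as
# (for a living) = `work'.

FUNCTION_WORDS = {"a", "an", "the", "do", "you", "is", "are", "for"}
WH_FORMS = {"what", "who", "why", "when", "where", "which", "how"}
IMPERATIVE_OPENERS = {("tell", "me"), ("lets", "discuss"), ("lets", "talk")}

def expression_type(words, raw):
    if raw.rstrip().endswith("?") or (words and words[0] in WH_FORMS):
        return "question"
    if tuple(words[:2]) in IMPERATIVE_OPENERS:
        return "imperative"
    return "statement"

def conceptualize(raw):
    words = raw.lower().strip("?.!").split()
    kind = expression_type(words, raw)
    contentives = [w for w in words if w not in FUNCTION_WORDS]
    if "for a living" in " ".join(words):
        contentives.append("work")        # word-group taken as a unit
    return kind, contentives

print(conceptualize("WHAT DO YOU DO FOR A LIVING?"))
# ('question', ['what', 'living', 'work'])
.END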
     Some special problems a dialogue algorithm must handle in a
psychiatric interview will now be outlined along with a brief
description of how the model deals with them.

.F
QUESTIONS

     The principal expression-type used by an interviewer is the
question. A question is recognized by its first term being a wh-
or how form and/or by the expression ending with a question mark. In
teletyped interviews a question may sometimes be put in declarative
form followed by a question mark, as in:
.V
(1) PT.- I LIKE TO GAMBLE ON THE HORSES.
(2) DR.- YOU GAMBLE?
.END
Although a question-word or auxiliary verb is missing in (2), the
model recognizes that a question is being asked about its gambling
simply by the question mark.
     Particularly difficult are those `when' questions which
require a memory that can assign each event a beginning, an end and
a duration. An improved version of the model should have this
capacity. Also troublesome are questions such as `how often' or `how
many', i.e. a `how' followed by a quantifier. If the model has "how
often" on its expectancy list while a topic is under discussion, the
appropriate reply can be made. Otherwise the model fails to
understand.
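     A minimal sketch of this expectancy mechanism follows; the
names, entries and canned reply are invented for illustration:
.V
# While a topic is under discussion, patterns the interviewer is
# likely to use next sit on an expectancy list; a `how' + quantifier
# question is answerable only if it was anticipated.

expectancy = []

def discuss(topic):
    expectancy.clear()
    if topic == "gambling":
        expectancy.extend(["how often", "how much", "why"])

def reply(question):
    q = question.lower().rstrip("?")
    for pattern in expectancy:
        if pattern in q:
            return "EVERY WEEK OR SO."     # reply for the matched pattern
    return "I DONT UNDERSTAND."            # model fails to understand

discuss("gambling")
print(reply("HOW OFTEN DO YOU GAMBLE?"))   # anticipated, answered
print(reply("HOW TALL ARE YOU?"))          # not anticipated
.END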
     In constructing a simulation of symbolic processes it is
arbitrary how much information to represent in the data-base. Should
the model know what the capital of Alabama is? It is trivial to store
a lot of facts, and there always will be boundary conditions. We took
the position that the model should know only what we believed it
reasonable to know relevant to the few hundred topics expectable in a
psychiatric interview. Thus the model performs poorly when subjected
to baiting `exam' questions designed to test its informational
limitations rather than to seek useful psychiatric information.

.F
IMPERATIVES

     Typical imperatives in a psychiatric interview consist of
expressions like:
.V
(3) DR.- TELL ME ABOUT YOURSELF.
(4) DR.- LETS DISCUSS YOUR FAMILY.
.END
     Such imperatives are actually interrogatives to the
interviewee about the topics they refer to. Since the only physical
action the model can perform is to `talk', imperatives are treated
as requests for information. They are identified by their common
introductory phrases: "tell me", "lets talk about", etc.
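     In sketch form (the phrase list is abbreviated, not the actual
pattern set):
.V
# An imperative is re-cast as a request for information about the
# topic following its introductory phrase.

INTRO_PHRASES = ("tell me about", "lets discuss", "lets talk about")

def imperative_topic(expr):
    e = expr.lower().rstrip(".")
    for phrase in INTRO_PHRASES:
        if e.startswith(phrase):
            return e[len(phrase):].strip()   # the topic asked about
    return None

print(imperative_topic("TELL ME ABOUT YOURSELF."))    # 'yourself'
print(imperative_topic("LETS DISCUSS YOUR FAMILY."))  # 'your family'
.END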
.F
DECLARATIVES

     In this category is lumped everything else. It includes
greetings, farewells, yes-no type answers, existence assertions and
the usual predications.

.F
AMBIGUITIES

     Words have more than one sense, a convenience for human
memories but a struggle for language-understanding algorithms.
Consider the word "bug" in the following expressions:
.V
(5) AM I BUGGING YOU?
(6) AFTER A PERIOD OF HEAVY DRINKING HAVE YOU FELT BUGS ON
YOUR SKIN?
(7) DO YOU THINK THEY PUT A BUG IN YOUR ROOM?
.END
     In expression (5) the term "bug" means to annoy, in (6) it
refers to an insect and in (7) it refers to a microphone used for
hidden surveillance. The model uses context to carry out
disambiguation. For example, when the Mafia is under discussion and
the affect-variable of fear is high, the model interprets "bug" to
mean microphone. In constructing this hypothetical individual we
took advantage of the selective nature of idiolects, which can have
an arbitrary restriction on word senses. One characteristic of the
paranoid mode is that no matter in what sense the interviewer uses a
word, the patient may idiosyncratically interpret it in some sense of
his own. This property is obviously of great help for an interactive
simulation with limited language-understanding abilities.
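     The disambiguation of "bug" can be suggested as follows; the
threshold and the topic names are invented, the model's actual
conditions being expressed through its affect-variables:
.V
# Context-directed choice among the senses of "bug" (illustrative).

def sense_of_bug(topic, fear):
    if topic == "mafia" and fear > 0.7:
        return "microphone"     # hidden-surveillance sense
    if topic == "drinking":
        return "insect"         # bugs on the skin
    return "annoy"              # as in "AM I BUGGING YOU?"

print(sense_of_bug("mafia", fear=0.9))    # microphone
print(sense_of_bug("weather", fear=0.1))  # annoy
.END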
.F
ANAPHORIC REFERENCES

     The common anaphoric references consist of the pronouns "it",
"he", "him", "she", "her", "they", "them" as in:
.V
(8) PT.- HORSERACING IS MY HOBBY.
(9) DR.- WHAT DO YOU ENJOY ABOUT IT?
.END
     When a topic is introduced by the patient as in (8), a
number of things can be expected to be asked about it. Thus the
algorithm has ready an updated expectancy-anaphora list which allows
it to determine whether the topic introduced by the model is being
responded to or whether the interviewer is continuing with the
previous topic.
     The algorithm recognizes "it" in (9) as referring to
"horseracing" because a flag for horseracing was set when horseracing
was introduced in (8), "it" was placed on the expected anaphora list,
and no new topic has been introduced. A more difficult problem arises
when the anaphoric reference points more than one I-O pair back in
the dialogue, as in:
.V
(10) PT.- THE MAFIA IS OUT TO GET ME.
(11) DR.- ARE YOU AFRAID OF THEM?
(12) PT.- MAYBE.
(13) DR.- WHY IS THAT?
.END
     The "that" of expression (13) does not refer to (12) but to
the topic of being afraid which the interviewer introduced in (11).
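     The expectancy-anaphora machinery can be sketched as follows;
the data structures are hypothetical:
.V
# Introducing a topic sets a flag and stocks the anaphora list with
# the pronouns and fragments expected to refer back to it.

topic_flags = {}
anaphora = {}            # pronoun or fragment -> expected referent

def introduce(topic, back_references):
    topic_flags[topic] = True
    for r in back_references:
        anaphora[r] = topic

def resolve(fragment):
    return anaphora.get(fragment)     # None if nothing was expected

introduce("horseracing", ["it"])
print(resolve("it"))                  # horseracing

introduce("being afraid", ["that"])
print(resolve("that"))                # being afraid, two I-O pairs back
.END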
     Another pronominal confusion occurs when the interviewer uses
`we' in two senses, as in:
.V
(14) DR.- WE WANT YOU TO STAY IN THE HOSPITAL.
(15) PT.- I WANT TO BE DISCHARGED NOW.
(16) DR.- WE ARE NOT COMMUNICATING.
.END
     In expression (14) the interviewer is using "we" to refer to
psychiatrists or the hospital staff, while in (16) the term refers to
the interviewer and patient. Identifying the correct referent would
require beliefs about the dialogue itself.

.F
TOPIC SHIFTS

     In the main a psychiatric interviewer is in control of the
interview. When he has gained sufficient information about a topic,
he shifts to a new topic. Naturally the algorithm must detect this
change of topic, as in the following:
.V
(17) DR.- HOW DO YOU LIKE THE HOSPITAL?
(18) PT.- ITS NOT HELPING ME TO BE HERE.
(19) DR.- WHAT BROUGHT YOU TO THE HOSPITAL?
(20) PT.- I AM VERY UPSET AND NERVOUS.
(21) DR.- WHAT TENDS TO MAKE YOU NERVOUS?
(23) PT.- JUST BEING AROUND PEOPLE.
(24) DR.- ANYONE IN PARTICULAR?
.END
     In (17) and (19) the topic is the hospital. In (21) the topic
changes to causes of the patient's nervous state.
     Topics touched upon previously can be re-introduced at any
point in the interview. The model knows that a topic has been
discussed previously because a topic-flag is set when a topic comes
up.
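     In sketch form (names hypothetical):
.V
# A topic-flag records that a topic has come up, so a later
# re-introduction is recognized as a return to old ground.

topic_flags = {}

def note_topic(topic):
    seen = topic_flags.get(topic, False)
    topic_flags[topic] = True
    return "old topic" if seen else "new topic"

print(note_topic("hospital"))   # new topic
print(note_topic("nervous"))    # new topic
print(note_topic("hospital"))   # old topic, re-introduced
.END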

.F
META-REFERENCES

     These are references, not about a topic directly, but about
what has been said about the topic, as in:
.V
(25) DR.- WHY ARE YOU IN THE HOSPITAL?
(26) PT.- I SHOULDNT BE HERE.
(27) DR.- WHY DO YOU SAY THAT?
.END
     The expression (27) is about, and meta to, expression (26). The
model does not respond with a reason why it said something but with a
reason for the content of what it said, i.e. it interprets (27) as
"why shouldn't you be here?"
     Sometimes when the patient makes a statement, the doctor
replies, not with a question, but with another statement which
constitutes a rejoinder, as in:
.V
(28) PT.- I HAVE LOST A LOT OF MONEY GAMBLING.
(29) DR.- I GAMBLE QUITE A BIT ALSO.
.END
     Here the algorithm interprets (29) as a directive to
continue discussing gambling, not as an indication to question the
doctor about gambling.

.F
ELLIPSES

     In dialogues one finds many ellipses, expressions from which
one or more words are omitted, as in:
.V
(30) PT.- I SHOULDNT BE HERE.
(31) DR.- WHY NOT?
.END
     Here the complete construction must be understood as:
.V
(32) DR.- WHY SHOULD YOU NOT BE HERE?
.END
     Again this is handled by the expectancy-anaphora list, which
anticipates a "why not".
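     For example, in sketch form (the stored expansion is
illustrative):
.V
# After "I SHOULDNT BE HERE", the expectancy list anticipates the
# elliptical "WHY NOT?" and holds its full expansion.

expected = {}

def assert_shouldnt_be_here():
    expected["why not"] = "WHY SHOULD YOU NOT BE HERE?"

def expand(fragment):
    key = fragment.lower().rstrip("?")
    return expected.get(key, fragment)

assert_shouldnt_be_here()
print(expand("WHY NOT?"))   # WHY SHOULD YOU NOT BE HERE?
.END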
     The opposite of ellipsis is redundancy, which usually presents
no problem since the same thing is being said more than once, as in:
.V
(33) DR.- LET ME ASK YOU A QUESTION.
.END
     The model simply recognizes (33) as a stereotyped pattern.

.F
SIGNALS

     Some fragmentary expressions serve only as directive signals
to proceed, as in:
.V
(34) PT.- I WENT TO THE TRACK LAST WEEK.
(35) DR.- AND?
.END
The fragment of (35) requests a continuation of the story introduced
in (34). The common expressions found in interviews are "and", "so",
"go on", "go ahead", "really", etc. If an input expression cannot be
recognized at all, the lowest-level default condition is to assume it
is a signal and either proceed with the next line in a story under
discussion or, if there is no story under discussion, begin a new
story with a prompting question or statement.
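     This default can be sketched as follows; the story mechanism is
simplified and the prompting question is an invented example:
.V
# Unrecognized input is taken as a signal to proceed: give the next
# line of the story under discussion, or prompt a new one.

SIGNALS = {"and", "so", "go on", "go ahead", "really"}

def proceed(expr, story_lines):
    e = expr.lower().strip(" ?!.")
    if e in SIGNALS or not recognized(e):
        if story_lines:
            return story_lines.pop(0)                 # continue story
        return "HAVE I TOLD YOU ABOUT THE BOOKIES?"   # new story prompt
    return None

def recognized(e):
    return False    # stand-in for the full pattern match

story = ["I LOST A HUNDRED DOLLARS.", "THE BOOKIES ARE CROOKED."]
print(proceed("AND?", story))   # I LOST A HUNDRED DOLLARS.
.END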

.F
IDIOMS

     Since so much of conversational language involves stereotypes
and special cases, the task of recognition is much easier than that
of linguistic analysis. This is particularly true of idioms. Either
one knows what an idiom means or one does not. It is usually hopeless
to try to decipher what an idiom means from an analysis of its
constituent parts. If the reader doubts this, let him ponder the
following expressions taken from actual teletyped interviews:
.V
(36) DR.- WHATS EATING YOU?
(37) DR.- YOU SOUND KIND OF PISSED OFF.
(38) DR.- WHAT ARE YOU DRIVING AT?
(39) DR.- ARE YOU PUTTING ME ON?
(40) DR.- WHY ARE THEY AFTER YOU?
(41) DR.- HOW DO YOU GET ALONG WITH THE OTHER PATIENTS?
(42) DR.- HOW DO YOU LIKE YOUR WORK?
(43) DR.- HAVE THEY TRIED TO GET EVEN WITH YOU?
(44) DR.- I CANT KEEP UP WITH YOU.
.END
     In people, the understanding of idioms is a matter of rote
memory. In an algorithm, idioms can simply be stored as such. As
each new idiom appears in teletyped interviews, its
recognition-pattern is added to the data-base on the inductive
grounds that what happens once can happen again.
     Another advantage in constructing an idiolect for a model is
that it recognizes its own idiomatic expressions, which tend to be
used by the interviewer (if he understands them) as in:
.V
(45) PT.- THEY ARE OUT TO GET ME.
(46) DR.- WHAT MAKES YOU THINK THEY ARE OUT TO GET YOU.
.END
     The expression (45) is really a double idiom in which "out"
means `intend' and "get" means `harm' in this context. Needless to
say, an algorithm which tried to pair off the various meanings of
"out" with the various meanings of "get" would have a hard time of
it. But an algorithm which recognizes what it itself is capable of
saying can easily recognize echoed idioms.
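     In sketch form, the idiom store is nothing more than a
pattern-to-meaning table; the entries here are abbreviated:
.V
# Idioms are stored whole; recognition is table lookup, not analysis
# of constituent parts.

IDIOMS = {
    "whats eating you": "what is bothering you",
    "what are you driving at": "what do you mean",
    "out to get": "intend to harm",
}

def match_idiom(expr):
    e = expr.lower().rstrip("?.")
    for pattern, meaning in IDIOMS.items():
        if pattern in e:
            return meaning
    return None

print(match_idiom("WHATS EATING YOU?"))        # what is bothering you
print(match_idiom("THEY ARE OUT TO GET ME."))  # intend to harm
.END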

.F
FUZZ TERMS

     In this category fall a large number of expressions which, as
non-contentives, have little or no meaning and therefore can be
ignored by the algorithm. The lower-case expressions in the following
are examples of fuzz:
.V
(47) DR.- well now perhaps YOU CAN TELL ME something ABOUT
YOUR FAMILY.
(48) DR.- on the other hand I AM INTERESTED IN YOU.
(49) DR.- hey I ASKED YOU A QUESTION.
.END
     The algorithm has "ignoring mechanisms" which allow for
an `anything' slot in its pattern recognition. Fuzz terms are thus
easily ignored and no attempt is made to analyze them.
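     The `anything' slot amounts to matching the contentive elements
of a pattern in order and skipping whatever lies between them, as the
following sketch suggests:
.V
# A pattern matches when its contentives appear in order; the words
# skipped over between them are fuzz.

def matches(pattern, expr):
    words = expr.lower().rstrip("?.").split()
    pos = 0
    for key in pattern:
        while pos < len(words) and words[pos] != key:
            pos += 1               # skip fuzz
        if pos == len(words):
            return False
        pos += 1
    return True

print(matches(["tell", "me", "family"],
              "well now perhaps YOU CAN TELL ME something ABOUT YOUR FAMILY."))
.END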

.F
SUBORDINATE CLAUSES

     A subordinate clause is a complete statement inside another
statement. It is most frequently introduced by a relative pronoun,
indicated in the following expressions by lower case:
.V
(50) DR.- WAS IT THE UNDERWORLD that PUT YOU HERE?
(51) DR.- WHO ARE THE PEOPLE who UPSET YOU?
(52) DR.- HAS ANYTHING HAPPENED which YOU DONT UNDERSTAND?
.END
     One of the linguistic weaknesses of the model is that it
takes the entire input as a single expression. When the input is
syntactically complex, such as possessing subordinate clauses, the
algorithm can become confused. To avoid this, future versions of the
model will segment the input into shorter and more manageable
patterns in which an optimal selection of emphases and neglect of
irrelevant detail can be achieved while avoiding combinatorial
explosions.

.F
VOCABULARY

     How many words should there be in the algorithm's vocabulary?
It is a rare human speaker of English who can recognize 40% of the
415,000 words in the Oxford English Dictionary. In his everyday
conversation an educated person uses perhaps 10,000 words and has a
recognition vocabulary of about 50,000 words. A study of telephone
conversations showed that 96% of the talk employed only 737 words
(French, Carter, and Koenig, 1930). Of course if the remaining 4% are
important but unrecognized contentives, the result may be ruinous to
the coherence of a conversation.
     In counting all the words in 53 teletyped psychiatric
interviews conducted by psychiatrists, we found only 721 different
words. Since we are familiar with psychiatric vocabularies and
styles of expression, we believed this language-algorithm could
function adequately with a vocabulary of at most a few thousand
contentives. There will always be unrecognized words. The algorithm
must be able to continue even if it does not have a particular word
in its vocabulary. This provision represents one great advantage
of pattern-matching over conventional linguistic parsing. Our
algorithm can guess while a parser must know with certainty in order
to proceed.

.F
MISSPELLINGS AND EXTRA CHARACTERS

     There is really no good defense against misspellings in a
teletyped interview except having a human monitor the conversation
and make the necessary corrections. Spelling-correction programs are
slow, inefficient, and imperfect. They experience great problems
when it is the first character in a word which is incorrect.
     Extra characters sent over the teletype by the interviewer or
by a bad phone line can be removed by a human monitor, since the
output from the interviewer first appears on the monitor's console
and is then typed by her directly to the program.

.F
META VERBS

     Certain common verbs such as "think", "feel", "believe", etc.
can take a clause as their object, as in:
.V
(54) DR.- I THINK YOU ARE RIGHT.
(55) DR.- WHY DO YOU FEEL THE GAMBLING IS CROOKED?
.END
     The verb "believe" is peculiar since it can also take as
object a noun or noun phrase, as in:
.V
(56) DR.- I BELIEVE YOU.
.END
     In expression (55) the conjunction "that" can follow the word
"feel", signifying a subordinate clause. This is not the case after
"believe" in expression (56). The model makes the correct
identification in (56) because nothing follows the "you".
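     The test can be sketched as follows, simplified to the single
case discussed:
.V
# "believe" takes a noun object when nothing follows the pronoun,
# a clause object otherwise.

def believe_object(expr):
    words = expr.lower().rstrip(".?").split()
    rest = words[words.index("believe") + 1:]
    return "noun object" if rest == ["you"] else "clause object"

print(believe_object("I BELIEVE YOU."))            # noun object
print(believe_object("I BELIEVE YOU ARE WRONG."))  # clause object
.END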
.F
ODD WORDS

     From extensive experience with teletyped interviews, we
learned the model must have patterns for "odd" words. We term them
such since these are words which are quite natural in the usual
vis-a-vis interview, in which the participants communicate through
speech, but which are quite odd in the context of a teletyped
interview. This should be clear from the following examples, in which
the odd words appear in lower case:
.V
(57) DR.- YOU sound CONFUSED.
(58) DR.- DID YOU hear MY LAST QUESTION?
(59) DR.- WOULD YOU come in AND sit down PLEASE?
(60) DR.- CAN YOU say WHO?
(61) DR.- I WILL see YOU AGAIN TOMORROW.
.END

.F
MISUNDERSTANDING

     It is perhaps not fully recognized by students of language
how often people misunderstand one another in conversation and yet
their dialogues proceed as if understanding and being understood had
taken place.
     A classic example is the following man-on-the-street interview:
.V
INTERVIEWER - WHAT DO YOU THINK OF MARIHUANA?
MAN - DIRTIEST TOWN IN MEXICO.
INTERVIEWER - HOW ABOUT LSD?
MAN - I VOTED FOR HIM.
INTERVIEWER - HOW DO YOU FEEL ABOUT THE INDIANAPOLIS 500?
MAN - I THINK THEY SHOULD SHOOT EVERY LAST ONE OF THEM.
INTERVIEWER - AND THE VIET CONG POSITION?
MAN - I'M FOR IT, BUT MY WIFE COMPLAINS ABOUT HER ELBOWS.
.END
     Sometimes a psychiatric interviewer realizes when
misunderstanding occurs and tries to correct it. Other times he
simply passes it by. It is characteristic of the paranoid mode to
respond idiosyncratically to particular word-concepts regardless of
what the interviewer is saying:
.V
(62) PT.- SOME PEOPLE HERE MAKE ME NERVOUS.
(63) DR.- I BET.
(64) PT.- GAMBLING HAS BEEN NOTHING BUT TROUBLE FOR ME.
.END
     Here one word sense of "bet" (to wager) is confused with the
offered sense of expressing agreement. As has been mentioned, this
property of paranoid conversation eases the task of simulation.
.F
UNUNDERSTANDING

     A dialogue algorithm must be prepared for situations in which
it simply does not understand. It cannot arrive at any interpretation
as to what the interviewer is saying since no pattern can be matched.
It may recognize the topic but not what is being said about it.
     The language-recognizer should not be faulted for a simple
lack of facts, as in:
.V
(65) DR.- WHO IS THE PRESIDENT OF TURKEY?
.END CONTINUE
when the data-base does not contain the word "Turkey". In this
default condition it is simplest to reply:
.V
(66) PT.- I DONT KNOW.
.END CONTINUE
and dangerous to reply:
.V
(67) PT.- COULD YOU REPHRASE THE QUESTION?
.END CONTINUE
because of the disastrous loops which can result.
     Since the main problem in the default condition of
ununderstanding is how to continue, the model employs heuristics such
as changing the level of the dialogue and asking about the
interviewer's intention, as in:
.V
(68) PT.- WHY DO YOU WANT TO KNOW THAT?
.END CONTINUE
or rigidly continuing with a previous topic or introducing a new
topic.
     These are admittedly desperate measures intended to prompt
the interviewer in directions the algorithm has a better chance of
understanding. Although it is usually the interviewer who controls
the flow from topic to topic, there are times when control must be
assumed by the algorithm.
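     These defaults can be suggested in sketch form; the ordering
and wording are illustrative, not the model's actual decision rule:
.V
# Ununderstanding defaults: a simple lack of facts gets "I DONT
# KNOW."; otherwise question the interviewer's intention or return
# rigidly to a previous topic.

def default_reply(unknown_word, previous_topic):
    if unknown_word:
        return "I DONT KNOW."
    # never "COULD YOU REPHRASE THE QUESTION?" -- it invites loops
    if previous_topic:
        return "LETS TALK ABOUT " + previous_topic.upper() + " AGAIN."
    return "WHY DO YOU WANT TO KNOW THAT?"

print(default_reply(unknown_word=True, previous_topic=None))
print(default_reply(unknown_word=False, previous_topic="gambling"))
.END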
     There are many additional problems in understanding
conversational language, but the above description should be
sufficient to convey some of the complexities involved. Further
examples will be presented in the next chapter in describing the
logic of the central processes of the model.